Predicting visual difference maps for computer‐generated images by integrating human visual system model and deep learning

Authors

Abstract

The quality of images generated by computer graphics rendering algorithms is mainly affected by visible distortion at some pixel locations. Image quality assessment (IQA) metrics are commonly utilized to assess the rendered images, but their result is a global difference value, which does not provide the pixel-wise differences needed to optimize renderings. In contrast, visibility models, including those based on visual perception and deep learning, can calculate pixel-wise differences between distorted and reference images. However, they are either only applicable to a single distortion type or seriously dependent on the training datasets. To this end, the authors propose a novel model, dubbed the Human Visual Perception and Deep Learning Image Difference Metric (HPDL-IDM), which combines a Human Visual System (HVS) model and deep learning. HPDL-IDM primarily consists of two modules: (i) a feature calculation module, which calculates maps of various kinds of features extracted from the image according to the characteristics of human eyes and concatenates them, and (ii) a deep learning module, which utilizes a neural network with an encoder–decoder structure trained on the LocvisVC and VisTexRes datasets, whose input and output are these concatenated feature maps and the final difference map, respectively. Additionally, the difference map can be pooled into a value between 0 and 1, so the model can be applied to many image processing tasks related to image quality metrics (IQMs). Experimental results show that HPDL-IDM's generalization capacity and accuracy are improved by a large margin compared with other visibility models.
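The abstract notes that the predicted per-pixel difference map can be pooled into a single value between 0 and 1. A minimal sketch of such a pooling step is shown below; the paper does not specify its pooling function, so the percentile-based choice and all names here are illustrative assumptions (percentile pooling is a common option for visibility metrics, since a few clearly visible distortions dominate perceived quality more than the average pixel does).

```python
import numpy as np

def pool_difference_map(diff_map, percentile=95):
    """Pool a per-pixel visibility difference map into one score in [0, 1].

    Hypothetical illustration: the paper's actual pooling function is not
    specified, so this uses percentile pooling as a stand-in.
    """
    diff_map = np.clip(np.asarray(diff_map, dtype=np.float64), 0.0, 1.0)
    return float(np.percentile(diff_map, percentile))

# Example: a 64x64 map where a narrow band of pixels carries a
# clearly visible distortion (value 0.8) and the rest are unchanged.
demo = np.zeros((64, 64))
demo[:, :4] = 0.8  # 6.25% of pixels distorted
score = pool_difference_map(demo)
```

Because only ~6% of pixels are distorted, mean pooling would report a near-zero score, while the 95th percentile reflects that the distortion is clearly visible; this is the usual motivation for non-mean pooling in visibility models.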


Similar resources

Just Noticeable Difference Estimation Using Visual Saliency in Images

Due to some physiological and physical limitations in the brain and the eye, the human visual system (HVS) is unable to perceive some changes in the visual signal whose range is lower than a certain threshold, the so-called just-noticeable distortion (JND) threshold. Visual attention (VA) provides a mechanism for selection of particular aspects of a visual scene so as to reduce the computational loa...


Integrating Visual Learning Within a Model-based ATR System

Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a...


Learning to Learn Visual Object Categories by Integrating Deep Learning with Hierarchical Bayes

Humans are capable of generalizing and learning new concepts after very little experience. They have the ability to create semantic structures from concepts they acquire, they can learn appropriate inductive biases that are later used as priors for different tasks, and they can learn novel categories from very few examples. While recent advances in neural networks and other machine learning met...


Temporal summation of moving images by the human visual system.

Measurements of threshold visibility were made as a function of duration of stimulus exposure for small moving dot targets, drifting sinusoidal gratings and moving patches of sinusoidal gratings, to investigate how the human visual nervous system summates over time signals arising from stimuli in motion. At image speeds of less than 16 deg/s, temporal summation is as strong and as extended for ...


Learning Visual Symbols for Parsing Human Poses in Images

Parsing human poses in images is fundamental in extracting critical visual information for artificial intelligent agents. Our goal is to learn self-contained body part representations from images, which we call visual symbols, and their symbolwise geometric contexts in this parsing process. Each symbol is individually learned by categorizing visual features leveraged by geometric information. In...



Journal

Journal title: IET Image Processing

Year: 2022

ISSN: 1751-9659, 1751-9667

DOI: https://doi.org/10.1049/ipr2.12681